
    3D Visual Perception for Self-Driving Cars using a Multi-Camera System: Calibration, Mapping, Localization, and Obstacle Detection

    Cameras are a crucial exteroceptive sensor for self-driving cars as they are low-cost and small, provide appearance information about the environment, and work in various weather conditions. They can be used for multiple purposes such as visual navigation and obstacle detection. We can use a surround multi-camera system to cover the full 360-degree field-of-view around the car. In this way, we avoid blind spots which can otherwise lead to accidents. To minimize the number of cameras needed for surround perception, we utilize fisheye cameras. Consequently, standard vision pipelines for 3D mapping, visual localization, obstacle detection, etc. need to be adapted to take full advantage of the availability of multiple cameras rather than treat each camera individually. In addition, processing of fisheye images has to be supported. In this paper, we describe the camera calibration and subsequent processing pipeline for multi-fisheye-camera systems developed as part of the V-Charge project. This project seeks to enable automated valet parking for self-driving cars. Our pipeline is able to precisely calibrate multi-camera systems, build sparse 3D maps for visual navigation, visually localize the car with respect to these maps, generate accurate dense maps, and detect obstacles based on real-time depth map extraction.
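    A common way to use several rigidly mounted cameras jointly, rather than treating each camera individually, is to lift feature rays from every camera into one vehicle body frame and feed them to a multi-camera ("generalized camera") solver. The sketch below only illustrates that lifting step with made-up extrinsics; it is not code from the V-Charge pipeline.

```python
# Minimal sketch (not the V-Charge implementation) of expressing per-camera
# rays in a common vehicle body frame so that mapping/localization can use
# all surround cameras jointly. The extrinsics below are illustrative values.
import numpy as np

def body_frame_ray(ray_cam, R_body_cam, t_body_cam):
    """Lift a unit ray from one camera into the body frame as an
    (origin, direction) pair usable by a multi-camera solver."""
    direction = R_body_cam @ ray_cam
    return t_body_cam, direction / np.linalg.norm(direction)

# Hypothetical extrinsics of a left-facing camera on the rig
R_body_cam = np.array([[0.0, 0.0, 1.0],
                       [-1.0, 0.0, 0.0],
                       [0.0, -1.0, 0.0]])
t_body_cam = np.array([0.0, 0.9, 1.2])   # mounted 0.9 m to the left, 1.2 m up

origin, direction = body_frame_ray(np.array([0.1, -0.2, 0.97]), R_body_cam, t_body_cam)
```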

    PIXHAWK: A micro aerial vehicle design for autonomous flight using onboard computer vision

    We describe a novel quadrotor Micro Air Vehicle (MAV) system that is designed to use computer vision algorithms within the flight control loop. The main contribution is a MAV system that is able to run both vision-based flight control and stereo-vision-based obstacle detection in parallel on an embedded computer onboard the MAV. The system design features the integration of a powerful onboard computer and the synchronization of IMU-Vision measurements by hardware timestamping, which allows tight integration of IMU measurements into the computer vision pipeline. We evaluate the accuracy of marker-based visual pose estimation for flight control and demonstrate marker-based autonomous flight including obstacle detection using stereo vision. We also show the benefits of our IMU-Vision synchronization for egomotion estimation in additional experiments where we use the synchronized measurements for pose estimation using the 2pt+gravity formulation of the PnP problem.
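    The idea behind a "2pt + gravity" PnP formulation is that a synchronized IMU measurement supplies the gravity direction in the camera frame, fixing roll and pitch so that only yaw and translation remain to be estimated from two point correspondences. The sketch below shows just the gravity-alignment step under that assumption; it is not the paper's solver, and the sample gravity reading is made up.

```python
# Minimal sketch of the gravity-alignment idea behind 2pt+gravity PnP:
# rotate the camera frame so the measured gravity direction maps to the
# world down-axis, leaving only yaw + translation (4 DoF) unknown.
import numpy as np

def gravity_alignment_rotation(g_cam):
    """Rotation that maps the measured gravity direction (camera frame)
    onto the world down-axis [0, 0, -1]."""
    a = g_cam / np.linalg.norm(g_cam)
    b = np.array([0.0, 0.0, -1.0])
    v = np.cross(a, b)
    c = float(np.dot(a, b))
    if np.isclose(c, -1.0):            # antiparallel case: 180-degree turn about x
        return np.diag([1.0, -1.0, -1.0])
    vx = np.array([[0.0, -v[2], v[1]],
                   [v[2], 0.0, -v[0]],
                   [-v[1], v[0], 0.0]])
    # Rodrigues formula for the rotation aligning a with b
    return np.eye(3) + vx + vx @ vx / (1.0 + c)

# Example: a tilted camera measuring gravity mostly along its +y axis
R_align = gravity_alignment_rotation(np.array([0.1, 9.7, -1.2]))
# After applying R_align, two correspondences suffice to constrain the
# remaining yaw angle and translation.
```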

    A 4-point Algorithm for Relative Pose Estimation of a Calibrated Camera with a Known Relative Rotation Angle

    We propose an algorithm to estimate the relative camera pose using four feature correspondences and one relative rotation angle measurement. The algorithm can be used for relative pose estimation of a rigid body equipped with a camera and a relative rotation angle sensor, which can be an odometer, an IMU, or a GPS/INS system. This algorithm exploits the fact that the relative rotation angles of both the camera and the relative rotation angle sensor are the same, as the camera and sensor are rigidly mounted to the same rigid body. Therefore, knowledge of the extrinsic calibration between the camera and sensor is not required. We carry out a quantitative comparison of our algorithm with the well-known 5-point and 1-point algorithms, and show that our algorithm exhibits the highest level of accuracy.
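    The calibration-free property rests on a standard fact: conjugating a rotation by the (unknown) camera-sensor extrinsic rotation does not change its rotation angle, so the angle reported by the odometer/IMU directly constrains the camera's relative rotation. The snippet below only demonstrates that constraint with assumed example rotations; it is not the authors' 4-point solver.

```python
# Minimal sketch of the constraint exploited by the 4-point algorithm:
# tr(R) = 1 + 2*cos(theta), and conjugation by any extrinsic rotation
# preserves the trace, hence the rotation angle.
import numpy as np
from scipy.spatial.transform import Rotation

def rotation_angle(R):
    """Rotation angle in radians recovered from the trace."""
    return float(np.arccos(np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0)))

R_cam = Rotation.from_euler("zyx", [20.0, -5.0, 3.0], degrees=True).as_matrix()
R_ext = Rotation.from_euler("xyz", [40.0, 10.0, -7.0], degrees=True).as_matrix()

# The sensor observes the body rotation R_ext @ R_cam @ R_ext.T; its angle
# equals the camera's rotation angle, which is the extra scalar constraint
# that lets 4 (instead of 5) correspondences determine the relative pose.
assert np.isclose(rotation_angle(R_cam),
                  rotation_angle(R_ext @ R_cam @ R_ext.T))
```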

    An orthorhombic polymorph of 1-benzyl-1H-benzimidazole

    The title compound, C14H12N2, in contrast to the previously reported monoclinic polymorph [Lei et al. (2009). Acta Cryst. E65, o2613], crystallizes in the orthorhombic crystal system. The dihedral angle between the imidazole ring system and the phenyl ring is 76.78 (16)°. Weak C—H⋯N and C—H⋯π interactions are observed in the crystal structure.

    Two temperate super-Earths transiting a nearby late-type M dwarf

    In the age of JWST, temperate terrestrial exoplanets transiting nearby late-type M dwarfs provide unique opportunities for characterising their atmospheres, as well as searching for biosignature gases. We report here the discovery and validation of two temperate super-Earths transiting LP 890-9 (TOI-4306, SPECULOOS-2), a relatively low-activity nearby (32 pc) M6V star. The inner planet, LP 890-9b, was first detected by TESS (and identified as TOI-4306.01) based on four sectors of data. Intensive photometric monitoring of the system with the SPECULOOS Southern Observatory then led to the discovery of a second outer transiting planet, LP 890-9c (also identified as SPECULOOS-2c), previously undetected by TESS. The orbital period of this second planet was later confirmed by MuSCAT3 follow-up observations. With a mass of 0.118±0.002 M⊙, a radius of 0.1556±0.0086 R⊙, and an effective temperature of 2850±75 K, LP 890-9 is the second-coolest star found to host planets, after TRAPPIST-1. The inner planet has an orbital period of 2.73 d, a radius of 1.320+0.053−0.027 R⊕, and receives an incident stellar flux of 4.09±0.12 S⊕. The outer planet has a similar size of 1.367+0.055−0.039 R⊕ and an orbital period of 8.46 d. With an incident stellar flux of 0.906±0.026 S⊕, it is located within the conservative habitable zone, very close to its inner limit. Although the masses of the two planets remain to be measured, we estimated their potential for atmospheric characterisation via transmission spectroscopy using a mass-radius relationship and found that, after the TRAPPIST-1 planets, LP 890-9c is the second-most favourable habitable-zone terrestrial planet known so far. The discovery of this remarkable system offers another rare opportunity to study temperate terrestrial planets around our smallest and coolest neighbours.
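    As a quick consistency check (not taken from the paper), the quoted stellar parameters and the 8.46 d period of LP 890-9c reproduce the reported ~0.9 S⊕ incident flux from Kepler's third law and the Stefan-Boltzmann law:

```python
# Back-of-the-envelope check that the quoted stellar parameters and period
# give the reported ~0.9 S_Earth incident flux for LP 890-9c.
M_star = 0.118          # stellar mass [M_sun]
R_star = 0.1556         # stellar radius [R_sun]
T_star = 2850.0         # effective temperature [K]
P_days = 8.46           # orbital period of LP 890-9c [d]
T_SUN = 5772.0          # IAU nominal solar effective temperature [K]

# Semi-major axis from Kepler's third law (a^3 = M * P^2 in AU, M_sun, yr units)
a_au = (M_star * (P_days / 365.25) ** 2) ** (1.0 / 3.0)

# Stellar luminosity relative to the Sun, then flux relative to Earth's
L_star = R_star ** 2 * (T_star / T_SUN) ** 4
S_planet = L_star / a_au ** 2

print(f"a = {a_au:.4f} AU, S = {S_planet:.3f} S_Earth")
# -> a ~ 0.040 AU, S ~ 0.91 S_Earth, consistent with the reported 0.906 S_Earth
```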

    Real-Time Direct Dense Matching on Fisheye Images Using Plane-Sweeping Stereo

    In this paper, we propose an adaptation of camera projection models for fisheye cameras into the plane-sweeping stereo matching algorithm. This adaptation allows us to do plane-sweeping stereo directly on fisheye images. Our approach also works for other non-pinhole cameras such as omnidirectional and catadioptric cameras when using the unified projection model. Despite the simplicity of our proposed approach, we are able to obtain full, good-quality, high-resolution depth maps from the fisheye images. To verify our approach, we show experimental results based on depth maps generated by our approach, and dense models produced from these depth maps.
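    For reference, the unified projection model (Geyer/Barreto/Mei) maps rays through a unit sphere and then projects them perspectively from a point offset by xi along the optical axis; the project/unproject pair is exactly what a plane-sweep warp needs. The sketch below uses illustrative intrinsics, not calibrated parameters from the paper.

```python
# Minimal sketch of the unified projection model for fisheye/omnidirectional
# cameras (no distortion terms), with the inverse mapping used when warping
# reference pixels onto a plane hypothesis during the sweep.
import numpy as np

def project_unified(X, fx, fy, cx, cy, xi):
    """3D point (camera frame) -> pixel."""
    Xs = X / np.linalg.norm(X)            # point on the unit sphere
    denom = Xs[2] + xi                     # projection center shifted by xi
    mx, my = Xs[0] / denom, Xs[1] / denom
    return np.array([fx * mx + cx, fy * my + cy])

def unproject_unified(u, v, fx, fy, cx, cy, xi):
    """Pixel -> unit ray (camera frame)."""
    mx, my = (u - cx) / fx, (v - cy) / fy
    r2 = mx * mx + my * my
    alpha = (xi + np.sqrt(1.0 + (1.0 - xi * xi) * r2)) / (r2 + 1.0)
    Xs = np.array([alpha * mx, alpha * my, alpha - xi])
    return Xs / np.linalg.norm(Xs)

# Round-trip check with made-up intrinsics for a fisheye-like camera
fx = fy = 300.0; cx = cy = 512.0; xi = 0.9
ray = unproject_unified(700.0, 400.0, fx, fy, cx, cy, xi)
pix = project_unified(ray, fx, fy, cx, cy, xi)   # ~ (700, 400)
```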

    Continuous-time Radar-inertial Odometry for Automotive Radars

    We present an approach for radar-inertial odometry which uses a continuous-time framework to fuse measurements from multiple automotive radars and an inertial measurement unit (IMU). Unlike camera and LiDAR sensors, radar sensors are not significantly affected by adverse weather conditions. Radar's robustness in such conditions and the increasing prevalence of radars on passenger vehicles motivate us to look at the use of radar for ego-motion estimation. A continuous-time trajectory representation is applied not only as a framework to enable heterogeneous and asynchronous multi-sensor fusion, but also to facilitate efficient optimization by being able to compute poses and their derivatives in closed form at any given time along the trajectory. We compare our continuous-time estimates to those from a discrete-time radar-inertial odometry approach and show that our continuous-time method outperforms the discrete-time method. To the best of our knowledge, this is the first time a continuous-time framework has been applied to radar-inertial odometry. (In Proceedings of the 2021 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS).)
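    The benefit of a continuous-time representation is that the state can be queried at any measurement timestamp, so asynchronous radar detections and IMU samples need not be aligned to common discrete state times. The sketch below illustrates this with an off-the-shelf cubic spline for translation and slerp for rotation over made-up knot poses; it is a simplification, not the paper's spline-on-SE(3) parameterization.

```python
# Minimal illustration of querying pose, velocity, and acceleration at an
# arbitrary (asynchronous) timestamp from a continuous-time trajectory.
import numpy as np
from scipy.interpolate import CubicSpline
from scipy.spatial.transform import Rotation, Slerp

# Hypothetical knot poses of the vehicle trajectory
t_knots = np.array([0.0, 0.1, 0.2, 0.3, 0.4])
positions = np.array([[0.0, 0.0, 0.0], [1.0, 0.1, 0.0], [2.1, 0.3, 0.0],
                      [3.0, 0.7, 0.0], [3.8, 1.3, 0.0]])
rotations = Rotation.from_euler("z", [0, 5, 12, 20, 30], degrees=True)

pos_spline = CubicSpline(t_knots, positions, axis=0)
rot_interp = Slerp(t_knots, rotations)

def query(t):
    """Pose, linear velocity, and acceleration at an arbitrary time t."""
    return (rot_interp([t]).as_matrix()[0],   # orientation
            pos_spline(t),                    # position
            pos_spline.derivative(1)(t),      # velocity  (closed form)
            pos_spline.derivative(2)(t))      # acceleration (closed form)

# Evaluate the state exactly at a radar detection's timestamp
R_wb, p_wb, v_wb, a_wb = query(0.137)
```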

    Vision-Based Autonomous Mapping and Exploration Using a Quadrotor MAV

    In this paper, we describe our autonomous vision-based quadrotor MAV system which maps and explores unknown environments. All algorithms necessary for autonomous mapping and exploration run on-board the MAV. Using a front-looking stereo camera as the main exteroceptive sensor, our quadrotor achieves these capabilities with both the Vector Field Histogram+ (VFH+) algorithm for local navigation and the frontier-based exploration algorithm. In addition, we implement the Bug algorithm for autonomous wall-following, which can optionally be selected as a substitute exploration algorithm in sparse environments where frontier-based exploration under-performs. We incrementally build a 3D global occupancy map on-board the MAV. The map is used by the VFH+ and frontier-based exploration algorithms in dense environments, and by the Bug algorithm for wall-following in sparse environments. During the exploration phase, images from the front-looking camera are transmitted over Wi-Fi to the ground station. These images are input to a large-scale visual SLAM process running off-board on the ground station. SLAM is carried out with pose-graph optimization and loop closure detection using a vocabulary tree. We improve the robustness of the pose estimation by fusing optical flow and visual odometry. Optical flow data is provided by a customized downward-looking camera integrated with a microcontroller, while visual odometry measurements are derived from the front-looking stereo camera. We verify our approaches with experimental results.
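    The core step of frontier-based exploration is identifying free cells of the occupancy map that border unknown space and picking one as the next goal. The sketch below shows that step on a toy 2D grid; it is a generic illustration with assumed cell labels, not the authors' on-board implementation.

```python
# Minimal sketch of frontier detection in an occupancy grid: frontier cells
# are free cells adjacent to at least one unknown cell.
import numpy as np

FREE, OCCUPIED, UNKNOWN = 0, 1, -1

def find_frontiers(grid):
    """Return (row, col) indices of free cells bordering unknown space."""
    frontiers = []
    rows, cols = grid.shape
    for r in range(rows):
        for c in range(cols):
            if grid[r, c] != FREE:
                continue
            neighbors = grid[max(r - 1, 0):r + 2, max(c - 1, 0):c + 2]
            if (neighbors == UNKNOWN).any():
                frontiers.append((r, c))
    return frontiers

# Toy 2D slice of an occupancy map: left half explored and free,
# right half still unknown, with one occupied cell.
grid = np.full((5, 8), UNKNOWN)
grid[:, :4] = FREE
grid[2, 1] = OCCUPIED
print(find_frontiers(grid))   # the free cells in column 3, next to unknown space
```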